Abstract:
Environmental policy integration (EPI), that is, the incorporation of environmental concerns into non‐environmental policy areas, has been widely adopted in public policies. However, EPI research has found much discrepancy between environmental objectives and actual implementation. This paper argues that analyzing EPI in the context of policy mixes with multiple objectives, multiple instruments, and their calibrations helps to better understand unavoidable tensions and limitations. We develop a framework to assess EPI at these three levels of policy output, synthesizing the EPI and policy mix literatures. We further distinguish four analytical dimensions to assess calibrations: stringency, specificity, flexibility, and temporality. A case study of the national implementation of the European Union's Common Agricultural Policy (CAP) in Germany, 2014–2022, is used to elaborate the conceptual argument. The CAP has saliently incorporated environmental objectives, while implementation, including the calibration of most instruments within predetermined corridors, is left to member states. A systematic meta‐review of 142 texts evaluating policy instruments and calibrations in the CAP 2014–2022, focusing on Germany, found that several CAP instruments link most farm income support to pro‐environmental behavior. These instruments could potentially have high environmental effectiveness and efficiency. But actual policy calibrations delivered weak EPI due to low stringency and specificity, while high flexibility and temporal accommodation of farmers' needs might support EPI by increasing acceptance. Weak EPI resulted from instrument calibrations in the face of unavoidable trade‐offs between competing objectives. Our results demonstrate that calibrations can significantly affect the strength of EPI adoption, and priorities within policy mixes more generally.
Abstract:
This paper contributes to two recently identified gaps in the policy design literature. First, an approach to measuring understudied, specific on-the-ground measures, namely policy settings and calibrations, is developed, with particular attention to "calibration flexibility." Second, with this better understanding of policy design, an emerging policy design causal mechanism perspective can be further elaborated. On-the-ground measures of the same policy instrument, Centers of Excellence research programs, are compared across six countries. Introduced in many OECD countries in the 1990s, Centers of Excellence were implemented with the goal of reversing the trend of "brain drain" and retaining highly mobile scholars. A theory-building process-tracing approach is adopted to identify first- and second-order mechanisms related to the pursuit of the broad policy goals of retaining and attracting scientific talent along with improving research capacity.
Abstract:
The ecological literature reveals considerable confusion about the meaning of validation in the context of simulation models. The confusion arises as much from semantic and philosophical considerations as from the selection of validation procedures. Validation is not a procedure for testing scientific theory or for certifying the 'truth' of current scientific understanding, nor is it a required activity of every modelling project. Validation means that a model is acceptable for its intended use because it meets specified performance requirements. Before validation is undertaken, (1) the purpose of the model, (2) the performance criteria, and (3) the model context must be specified. The validation process can be decomposed into several components: (1) operation, (2) theory, and (3) data. Important concepts needed to understand the model evaluation process are verification, calibration, validation, credibility, and qualification. These terms are defined in a limited technical sense applicable to the evaluation of simulation models, and not as general philosophical concepts. Different tests and standards are applied to the operational, theoretical, and data components. The operational and data components can be validated; the theoretical component cannot. The most common problem with ecological and environmental models is failure to state what the validation criteria are. Criteria must be explicitly stated because there are no universal standards for selecting what test procedures or criteria to use for validation. A test based on comparison of simulated versus observed data is generally included whenever possible. Because the objective and subjective components of validation are not mutually exclusive, disagreements over the meaning of validation can only be resolved by establishing a convention.
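The abstract's central prescription, that a model is "validated" only against performance criteria stated before the test, can be sketched as a small operational-validation check. The biomass series, the RMSE metric, and the threshold below are illustrative assumptions, not taken from the paper:

```python
import math

def validate(observed, simulated, rmse_limit):
    """Operational validation: pass/fail against a prespecified criterion.

    The model is judged acceptable for its intended use only if its
    root-mean-square error against observations stays within a limit
    that was fixed *before* the comparison was run.
    """
    if len(observed) != len(simulated):
        raise ValueError("series must be the same length")
    rmse = math.sqrt(
        sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed)
    )
    return rmse, rmse <= rmse_limit

# Hypothetical biomass series (g/m^2); criterion fixed in advance: RMSE <= 5.
rmse, ok = validate([10.0, 12.0, 15.0], [11.0, 13.0, 14.0], rmse_limit=5.0)
```

The point of the explicit `rmse_limit` argument is exactly the abstract's complaint: without it there is no universal standard to fall back on, so the test is meaningless.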
Abstract:
Land-use-change drivers related to institutional dynamics, including historical path dependencies and political dynamics associated with urban land transformation, are difficult to relate to specific spatial locations and thus are not easily included in spatial models of urban land-use change. In this paper we describe a land-use model with variables representing such institutional dynamics in the Greater Boston region, a metropolitan area characterized by periurban sprawl, for the period 1985-99. An aggregate land-use model is developed at the municipal level, based on a narrative analysis drawn from in-depth interviews with town planners, state officials, and land developers, to explain land-development patterns documented over that study period using aerial photography. Explanatory variables, including town financial variables, school quality measures, and spatial variables associated with access and location, are linked to land-change outcomes through the selection environment framework, a framework borrowed from economic geography to describe how regional growth patterns are shaped by locally specific institutional, market, and spatial contexts that constrain individual land-use decision makers. Results of the analysis suggest that institutional dynamics associated with housing values and associated tax revenues, educational expenditures, and exclusive zoning practices significantly explain municipal land-use change in the suburban or periurban context.
Abstract:
Calibration of measuring equipment is conducted by following normative or applicable documents such as standards, manufacturer manuals and instructions, technical orders issued by defense organizations, or scientific papers. An accreditation body recognizes calibration laboratories by evaluating their technical competence and their compliance with the quality requirements of ISO/IEC 17025. The accreditation body must have defined criteria for evaluating different calibration methods, which should ensure that laboratories perform calibration in a technically competent manner whether the methods are fully or only partially based on the relevant reference documents. A discussion of different points of view on choosing these criteria, together with the Israel Laboratory Accreditation Authority (ISRAC) policy on this issue, is presented.
Abstract:
Quality assurance is an integral part of any calibration facility. The calibration facility as well as its customers are interested in the outgoing quality of the facility's production. In most calibration labs, the inspection of calibrated items is performed according to a suitable sampling inspection policy. Some of these policies are very good at assuring the quality of the calibration services offered, but do not provide a clear assessment of the outgoing quality of the facility's entire production. This paper develops two methods of calculating the average outgoing quality (AOQ) of a calibration lab that uses a multistage sampling inspection policy. The policy structure is presented first, along with the exact procedure inspectors follow and the methods to calculate the AOQ. The two methods differ from each other in the type of data required to calculate the AOQ. The first method requires the technicians' production, the number of items subject to inspection, and the number of failing items found. The second method requires only the number of technicians at each level of the multistage inspection policy. The performance of the two methods is verified by building a simulation model on an Excel worksheet. The model simulates the calibration facility with the right parameters and then compares the two methods with the actual AOQ. The paper further discusses the advantages and disadvantages of each method in a broader context of quality assurance.
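The abstract does not give the paper's formulas, but the data listed for the first method (production, items inspected, failures found) supports a generic single-stage approximation in the same spirit. The function, its name, and the numbers below are illustrative assumptions, not the paper's actual multistage procedure:

```python
def aoq_estimate(produced, inspected, failed):
    """Rough average-outgoing-quality estimate from inspection records.

    Assumes defects found at inspection are corrected, so only the
    uninspected share of production can carry escaping defects.
    """
    if not (0 <= failed <= inspected <= produced):
        raise ValueError("need 0 <= failed <= inspected <= produced")
    p_hat = failed / inspected          # estimated defect rate
    escaping = p_hat * (produced - inspected)
    return escaping / produced          # fraction of output still defective

aoq = aoq_estimate(produced=500, inspected=100, failed=4)  # -> 0.032
```

A multistage policy would apply such an estimate per inspection level and technician group, which is presumably where the two methods described in the paper diverge.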
Abstract:
We present an analytical framework to examine the open economy monetary policy rule of a central bank under asymmetric preferences. The resulting policy rule is then empirically examined using quarterly data with regard to Canada and the UK from 1983q1 to 2007q4. Our empirical investigation shows that the open economy policy rule receives support from the data and that the monetary policy makers in the UK and Canada have asymmetric preferences. Robustness checks based on model calibration provide support for the suggested policy rule. Copyright (c) 2016 John Wiley & Sons, Ltd.
Abstract:
Body weight measurement is fundamental in nutritional screening. Thus, weighing scales should be regularly calibrated. This procedure is so important that in 1990 the Council of Europe produced an ad hoc directive. Unfortunately, little is known about the management of scales in hospitals. We performed an inventory in the City Hospital of Trento (∼900 beds), which is responsible for the healthcare of ∼250,000 inhabitants. The analysis included flat, chair, and paediatric neonatal scales. We focused attention on the date of arrival and on calibration management. The hospital had 211 scales: 190 flat scales, 13 chair scales, and 8 paediatric neonatal scales. The mean "age" was 10.3±7.3 years; 22.3% were 5–10 years old and 44.1% were aged >10 years. No scale had ever been calibrated. The scales are managed by the "Internal Logistics Unit", meaning that they are regarded as pieces of furniture rather than as diagnostic tools. Accurate weight measurement is a key task in nutritional management, yet our results once again highlight limitations in this process. It is not enough to design laws and accreditation standards for the European Community; enforcement should also be checked.
Abstract:
Approximate computing, which trades off computation quality (e.g., accuracy) against computational effort (e.g., energy) for error-tolerant applications such as media processing and the emerging recognition, mining, and synthesis (RMS) applications, has gained significant traction in recent years. With approximate computing, we expect to obtain acceptable results, but how do we make sure the quality of the final results is good enough? This challenging problem remains largely unexplored. As many RMS applications employ iterative methods (IMs) for solution finding, wherein a sequence of improving approximate solutions is generated before reaching the final converged solution, in this paper we propose ApproxIt, a novel quality management framework for approximate computing dedicated to IMs with quality guarantees. ApproxIt comprises two stages: 1) an offline stage and 2) an online stage. At the offline stage, we analyze the manifold of the parameter space to identify the given problem as convex or nonconvex, and for each case we propose a corresponding runtime dynamic quality calibration scheme and reconfiguration control policy. At runtime, our lightweight quality estimator evaluates the intermediate quality at specific calibration iterations, which are determined by a novel Markov model-based calibration scheme. If a quality violation occurs, the reconfiguration control policy selects the most appropriate approximate computing mode for the following iterations. With the proposed dynamic effort scaling technique, ApproxIt dramatically improves application energy efficiency under quality guarantees, as demonstrated in our experimental results.
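The control loop described here (run cheap approximate iterations by default, estimate quality at periodic calibration iterations, escalate the computing mode on a quality violation) can be illustrated with a toy iterative method. The solver, the rounding-based "approximate mode", and the thresholds below are stand-ins chosen for this sketch, not ApproxIt's actual scheme:

```python
def solve_sqrt2(tol=1e-9, check_every=5, max_iters=100):
    """Heron iteration for sqrt(2) with a crude two-mode effort scale.

    'approx' mode rounds each update to 3 decimals (standing in for a
    cheap, low-precision compute mode); a periodic calibration step
    estimates the residual and escalates to 'exact' mode once the
    approximate mode can no longer reach the quality target.
    """
    x, mode = 1.0, "approx"
    for it in range(1, max_iters + 1):
        step = 0.5 * (x + 2.0 / x)          # exact Heron update
        x = round(step, 3) if mode == "approx" else step
        if it % check_every == 0 or mode == "exact":
            residual = abs(x * x - 2.0)     # lightweight quality estimate
            if residual <= tol:
                return x, it, mode          # quality target met
            if mode == "approx" and residual <= 1e-3:
                mode = "exact"              # approximation has stalled: escalate

x, iters, mode = solve_sqrt2()
```

In this toy run the cheap mode does most of the iterations and the exact mode is engaged only for the final refinement, which is the energy-saving pattern the dynamic effort scaling technique targets.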
Abstract:
Baseline assumptions play a crucial role in conducting consistent quantitative policy assessments with dynamic Computable General Equilibrium (CGE) models. Two essential factors that influence the determination of the baselines are the data sources of the projections and the applied calibration methods. We propose a general Bayesian approach that can be employed to build a baseline for any recursive-dynamic CGE model. We use metamodeling techniques to transform the calibration problem into a tractable optimization problem while simultaneously reducing the computational costs. This transformation allows us to derive the exogenous model parameters needed to match the projections. We demonstrate how to apply the approach using a simple CGE model and supply the full code. Additionally, we apply our method to a multi-region, multi-sector model and show that the calibrated parameters matter, as policy implications derived from the simulations differ significantly depending on the calibration.
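The metamodeling idea, replacing expensive model runs with a cheap surrogate so that calibration becomes a tractable optimization, can be sketched on a deliberately trivial stand-in for a CGE model. Everything below (the one-parameter growth "model", the quadratic surrogate, the grid) is a hypothetical illustration, not the paper's Bayesian procedure:

```python
def toy_cge(g, y0=100.0, horizon=10):
    """Stand-in for an expensive recursive-dynamic CGE run: GDP after
    `horizon` periods under exogenous productivity growth g."""
    return y0 * (1.0 + g) ** horizon

def calibrate(target, grid=(0.00, 0.02, 0.04)):
    """Metamodel calibration sketch: evaluate the model at a few sample
    points, fit a quadratic surrogate to the squared projection gap,
    and return the surrogate's minimiser as the calibrated parameter."""
    g0, g1, g2 = grid
    l0, l1, l2 = ((toy_cge(g) - target) ** 2 for g in grid)
    # vertex of the interpolating parabola through the three (g, loss) points
    num = (g1 - g0) ** 2 * (l1 - l2) - (g1 - g2) ** 2 * (l1 - l0)
    den = (g1 - g0) * (l1 - l2) - (g1 - g2) * (l1 - l0)
    return g1 - 0.5 * num / den

# Projection generated with g = 0.03; calibration should land close to it.
g_star = calibrate(target=toy_cge(0.03))
```

Because the true loss is not exactly quadratic, the surrogate's minimiser only approximates the true parameter; in practice one would iterate, refining the sample grid around the current estimate, which is where the computational savings over direct model-based optimization come from.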